ISSN : 1738-1894(Print)
ISSN : 2288-5471(Online)
Journal of Nuclear Fuel Cycle and Waste Technology Vol.21 No.1 pp.165-173
DOI : https://doi.org/10.7733/jnfcwt.2023.002

Comparison of Numerical Analysis Methods of APro for the Total System Performance Assessment of a Geological Disposal System

Hyun Ho Cho*, Hong Jang, Dong Hyuk Lee, Jung-Woo Kim
Korea Atomic Energy Research Institute, 111, Daedeok-daero 989beon-gil, Yuseong-gu, Daejeon 34057, Republic of Korea
* Corresponding Author. Hyun Ho Cho, Korea Atomic Energy Research Institute, E-mail: h2joh33@kaeri.re.kr, Tel: +82-42-866-6205

Received August 5, 2022; Revised September 20, 2022; Accepted October 14, 2023

Abstract


Various linear system solvers combined with multi-physics analysis schemes are compared, focusing on the near-field region and considering thermal-hydraulic-chemical (THC) coupled multi-physics phenomena. APro, developed at KAERI for total system performance assessment (TSPA), performs finite element analysis with COMSOL, for which the various combinations of linear system solvers and multi-physics analysis schemes should be compared. The KBS-3 type disposal system proposed by Sweden is set as the target system, and the near-field region, which accounts for most of the computational burden, is considered. For the comparison of numerical analysis methods, the computing time and memory requirement are the main concerns, and thus the simulation time is set to one year. With a single deposition hole problem, PARDISO and GMRES-SSOR are selected as representative direct and iterative solvers, respectively. The performance of the representative linear system solvers is then examined through a problem with an increasing number of deposition holes, and the GMRES-SSOR solver with a segregated scheme shows the best performance with respect to the computing time and memory requirement. The results of the comparative analysis are expected to provide a good guideline for choosing better numerical analysis methods for TSPA.





    1. Introduction

    A geological disposal system is a multi-barrier system that consists of an engineered barrier system (EBS) and a natural barrier system (NBS), in order to contain high-level radioactive wastes including spent nuclear fuels in a stable geological formation and isolate them from the biosphere. The geological disposal system has the following characteristics from the viewpoint of safety assessment.

    • Various thermal-hydraulic-mechanical-chemical (THMC) complex phenomena occur within the disposal system. To elaborate on these complex phenomena, the decay heat of the spent nuclear fuels, which is transferred by convection and conduction, is considered for the thermal process. The groundwater flow and geochemical reactions in the groundwater are considered for the hydraulic and chemical processes, respectively, and the hydrodynamic pressure in the deep underground is considered for the mechanical process.

    • The period of safety assessment of the disposal system should be set to a relatively long time, such as more than 0.1 million years, in order to properly consider the toxicity of spent nuclear fuel.

    • The dimensions of the EBS components are on a meter scale, but the overall area of the disposal system may reach a scale of several km2. Specifically, it is estimated that a disposal area of about 5 km2 is required in order to dispose of all the spent nuclear fuel expected to be generated in Korea [1].

    Despite such complexity and the spatio-temporal scale differences of the disposal system, the conventional approach to safety assessment has been the system-level approach, which considers these characteristics only implicitly due to limited computing capacity. However, this approach has a fundamental shortcoming in considering coupled processes in long-term evolution scenarios. In order to overcome this limit, the Korea Atomic Energy Research Institute (KAERI) has developed APro, an Adaptive Process-based total system performance assessment framework for a geological disposal system [2]. Unlike the system-level approach, APro numerically simulates each of the processes of the THMC complex phenomena using COMSOL [3], a general-purpose finite element method (FEM) based software for multi-physics analysis.

    The finite element analysis schemes provided by COMSOL solve a general sparse linear system of the form Ax = b to obtain a numerical solution. Linear system solvers are classified into direct and iterative methods according to how the solution is found. The direct method uses lower-upper (LU) decomposition, which obtains a robust solution without any approximation. In general, however, the LU decomposition of a sparse matrix induces a large amount of fill-in, which increases the memory requirement significantly. PARDISO [4] and MUMPS [5] are widely used direct solvers. The iterative method refines the solution repeatedly until the residual, defined as the difference between the current and previous solutions, falls below a given criterion. In general, the iterative method has a smaller memory requirement than the direct method but can take a longer time to obtain a converged solution. The conjugate gradient (CG) [6], biconjugate gradient stabilized (BiCGSTAB) [7], and generalized minimum residual (GMRES) [8] solvers are widely used iterative solvers.
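    The contrast between the two solver families can be illustrated outside COMSOL. The following is a minimal SciPy sketch, assuming a simple 1-D Laplacian as a stand-in for an FEM system matrix; it is not APro or COMSOL code, only an illustration of a direct LU solve versus a preconditioned GMRES solve on the same sparse system Ax = b.

    ```python
    # Minimal sketch; the 1-D Laplacian below is an assumed stand-in for an FEM matrix.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 10_000
    A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Direct method: LU factorization gives a robust solution, but fill-in of the
    # sparse factors is what drives up the memory requirement.
    lu = spla.splu(A)
    x_direct = lu.solve(b)

    # Iterative method: GMRES with an incomplete-LU preconditioner keeps memory low,
    # but the time to converge depends on the quality of the preconditioner.
    ilu = spla.spilu(A, drop_tol=1e-5)
    M = spla.LinearOperator(A.shape, ilu.solve)
    x_iter, info = spla.gmres(A, b, M=M)

    print(info, np.linalg.norm(x_direct - x_iter))  # info == 0 means GMRES converged
    ```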

    Fully-coupled and segregated schemes are used to solve coupled multi-physics phenomena. A fully-coupled scheme solves all physics modules simultaneously, whereas a segregated scheme solves each module sequentially through an iterative procedure. A fully-coupled scheme does not require an additional interface between physics modules, and consequently no additional numerical error appears, but a large amount of memory is required [3]. The segregated scheme has an advantage in the utilization of computing resources when a huge domain is modeled with coupled multi-physics phenomena.
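    The difference between the two schemes can be sketched with a toy two-field linear system (an assumption for illustration only, not APro's physics modules): the fully-coupled scheme assembles and solves the whole block system at once, while the segregated scheme sweeps over the sub-blocks until the update between sweeps becomes small.

    ```python
    # Minimal sketch of fully-coupled vs. segregated solves for a toy block system
    # [A11 A12][u1]   [b1]
    # [A21 A22][u2] = [b2]   (weak coupling assumed so the segregated sweep converges)
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A11 = 4.0 * np.eye(n) + rng.normal(scale=0.1, size=(n, n))
    A22 = 4.0 * np.eye(n) + rng.normal(scale=0.1, size=(n, n))
    A12 = rng.normal(scale=0.1, size=(n, n))   # coupling blocks
    A21 = rng.normal(scale=0.1, size=(n, n))
    b1, b2 = rng.normal(size=n), rng.normal(size=n)

    # Fully-coupled scheme: one large system, more memory, no splitting iterations.
    A = np.block([[A11, A12], [A21, A22]])
    u_full = np.linalg.solve(A, np.concatenate([b1, b2]))

    # Segregated scheme (block Gauss-Seidel): solve each sub-block in turn and
    # iterate; each solve is smaller, at the cost of extra sweeps.
    u1, u2 = np.zeros(n), np.zeros(n)
    for sweep in range(100):
        u1_new = np.linalg.solve(A11, b1 - A12 @ u2)
        u2_new = np.linalg.solve(A22, b2 - A21 @ u1_new)
        converged = max(np.linalg.norm(u1_new - u1), np.linalg.norm(u2_new - u2)) < 1e-10
        u1, u2 = u1_new, u2_new
        if converged:
            break

    print(sweep, np.linalg.norm(np.concatenate([u1, u2]) - u_full))
    ```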

    Although computing power has continuously improved, TSPA still entails a heavy computational burden due to the huge number of degrees of freedom (DOFs) and the strong nonlinearity in a multi-physics analysis. In this regard, in order to properly consider the characteristics of the disposal system, i.e., the complex phenomena and the huge spatio-temporal scale mentioned above, it is necessary to compare and examine the effect of the numerical analysis methods on calculation efficiency. Therefore, in this study, various linear system solvers and multi-physics analysis schemes provided by COMSOL are compared with regard to the computing time and memory requirement.

    2. Methodology

    2.1 System Definition

    In APro, the KBS-3 type disposal system [9] proposed by Sweden is considered as the target system. In the KBS-3 concept, several disposal tunnels are excavated about 500 m underground, and sets of deposition holes are located along each disposal tunnel. Canisters containing high-level radioactive wastes are disposed of in the deposition holes and surrounded by buffer materials. After all canisters are disposed of, the disposal tunnels are filled with backfill materials. Together, these components constitute the EBS, whose detailed information is presented in Table 1. The near-field is defined as the area covering the EBS and surrounding bedrock, as shown in Fig. 1, and the far-field covers the remaining part of the NBS.

    Table 1

    Detailed information for the components of EBS

    Fig. 1

    Unit near-field structure covering the EBS and surrounding bedrock.

    APro models THMC complex phenomena occurring in the disposal system with a modularization concept, which couples each phenomenon for a multi-physics analysis [2]. In this study, default modules considering the most fundamental phenomena are employed, and the details of the default modules are summarized in Table 2.

    Table 2

    Governing equations [2] and descriptions for the default modules of APro


    2.2 Numerical Analysis Methods

    Linear system solvers and multi-physics analysis schemes should be selected appropriately depending on the characteristics of the problem, i.e., the properties of the linear system. For instance, the computing time and convergence stability can be strongly affected by the preconditioner chosen for an iterative solver. Also, a segregated scheme appears suitable for multi-physics analysis, but unnecessary iterations can occur if the convergence rate differs significantly among the physics modules.

    In general, direct solvers produce a robust solution with stable convergence, while iterative solvers produce a solution with a lower memory requirement. In this study, the direct solvers PARDISO and MUMPS are compared in combination with the multi-physics analysis schemes. In the case of the iterative method, various preconditioners for the GMRES solver are examined: the incomplete LU (ILU) [10], Jacobi, symmetric successive over-relaxation (SSOR) [11], SOR line [12], algebraic multigrid (AMG) [13], and geometric multigrid (GMG) [13] preconditioners are compared.
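    As a small illustration of why the preconditioner matters (a SciPy sketch, not COMSOL; only Jacobi and ILU are shown as stand-ins for the preconditioners listed above), the GMRES iteration count can be compared for the same system under different preconditioners:

    ```python
    # Minimal sketch: count GMRES iterations with no preconditioner, Jacobi, and ILU.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 5_000
    d = 2.0 + np.linspace(0.0, 10.0, n)   # varying diagonal so Jacobi scaling matters
    A = sp.diags([-1.0, d, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    def gmres_iterations(A, b, M=None):
        """Run GMRES and return (iteration count, convergence flag)."""
        count = [0]
        def cb(res_norm):          # called once per GMRES iteration
            count[0] += 1
        _, info = spla.gmres(A, b, M=M, callback=cb, callback_type="pr_norm")
        return count[0], info

    it_plain, _ = gmres_iterations(A, b)                     # no preconditioner

    M_jacobi = sp.diags(1.0 / A.diagonal(), format="csc")    # Jacobi: inverse diagonal
    it_jacobi, _ = gmres_iterations(A, b, M=M_jacobi)

    ilu = spla.spilu(A, drop_tol=1e-4)                       # incomplete LU factors
    M_ilu = spla.LinearOperator(A.shape, ilu.solve)
    it_ilu, _ = gmres_iterations(A, b, M=M_ilu)

    print(it_plain, it_jacobi, it_ilu)
    ```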

    2.3 Problem Setting for Comparison

    Although the EBS occupies a smaller portion of the repository than the NBS, its computational burden in a numerical analysis is higher because the THMC coupled phenomena mainly occur in the EBS due to the decay heat from the spent nuclear fuels and the groundwater flow into the buffer and backfill. Therefore, the near-field region is considered for comparing the solvers and schemes before a full-scale analysis. In this study, the default modules that had already been verified for THC coupled phenomena [2] are used. Notably, the accuracy of the solutions should also be assessed when the computing performance of the various linear system solvers and multi-physics analysis methods is compared. However, since the computing time and memory requirement are the main concerns for comparing the performance of the numerical analysis methods, the simulation time was set to one year, for which the difference in the results among the solver options was negligible.

    For all the components of the near-field region, tetrahedral meshes are used according to the mesh generation criteria presented in Table 3. Since the disposal system has a repetitive structure, the problem size can be enlarged by duplicating a unit near-field area, as shown in Fig. 2. As the number of deposition holes increases, the number of DOFs, which determines the size of the linear system, increases linearly, as shown in Fig. 3. In this regard, the memory requirement for a given repository size can be easily predicted from the relation between the number of DOFs and the number of deposition holes.
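    As an example of such a prediction, a linear fit of DOFs against the number of deposition holes and of memory against DOFs can be extrapolated to larger layouts. The sketch below uses hypothetical placeholder numbers, not the measured values of this study.

    ```python
    # Minimal sketch; the hole counts, DOF counts, and memory values are hypothetical.
    import numpy as np

    holes  = np.array([1, 5, 10, 20])                  # number of deposition holes
    dofs   = np.array([0.4e6, 1.8e6, 3.5e6, 6.9e6])    # degrees of freedom
    mem_gb = np.array([12.0, 48.0, 92.0, 180.0])       # peak memory in GB

    # Fit the (approximately) linear relations DOFs(holes) and memory(DOFs).
    dof_fit = np.polyfit(holes, dofs, deg=1)
    mem_fit = np.polyfit(dofs, mem_gb, deg=1)

    def predicted_memory(n_holes):
        """Extrapolate the peak memory requirement for a given number of holes."""
        return np.polyval(mem_fit, np.polyval(dof_fit, n_holes))

    print(f"Predicted memory for 68 holes: {predicted_memory(68):.0f} GB")
    ```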

    Table 3

    Mesh generation information for the unit near-field structure

    Fig. 2

    Enlargement of problem size.

    Fig. 3

    Linear relation between the number of DOFs and the number of deposition holes.

    With the single deposition hole problem, various linear system solvers combined with multi-physics analysis schemes are compared with regard to the computing time and memory requirement. The performance of the representative linear system solvers is then examined through a problem with an increasing number of deposition holes. All calculation results are obtained on an on-premises cluster in which each computing node has 28 cores of an Intel Xeon CPU E5-2680 v4 @ 2.4 GHz and 256 GB of RAM.

    3. Results and Discussion

    First, various combinations of numerical analysis methods are compared in terms of the computing time and memory requirement, and the representative solvers for the direct and iterative methods are selected.

    In the case of the direct method, as shown in Fig. 4, both the MUMPS and PARDISO solvers show a smaller memory requirement with the segregated scheme, while the computing time shows little difference between the two schemes. With the segregated scheme, the PARDISO solver performs better than the MUMPS solver, and hence PARDISO is selected as the representative solver for the direct method.

    Fig. 4

    Comparison of the computing time and memory requirement for direct solvers where F indicates fully-coupled scheme and S indicates segregated scheme.

    In the case of the iterative method, as shown in Fig. 5, there is little difference in the memory requirement between the fully-coupled and segregated schemes. However, for certain combinations such as GMRES-SSOR, the computing time with the fully-coupled scheme is shorter than that with the segregated scheme. Therefore, the GMRES-SSOR combination is selected as the representative solver for the iterative method.

    Fig. 5

    Comparison of the computing time and memory requirement according to the preconditioners for GMRES with fully-coupled scheme (left) and segregated scheme (right).

    In order to determine the computing resource requirement as the number of DOFs increases, the computing time and memory requirement are estimated according to the number of deposition holes in a single disposal tunnel. With the chosen solvers, PARDISO and GMRES-SSOR, four combinations are examined according to the type of linear system solver and the multi-physics analysis scheme.

    First, fully-coupled and segregated schemes are compared when a direct solver is used, as shown in Fig. 6. For the same number of deposition holes, the computing time shows only about a 10% difference but the memory requirement shows a significant difference. Note that the black dotted horizontal line indicates the RAM capacity. As a result, up to 27 deposition holes can be analyzed for the fully-coupled scheme, and 68 deposition holes can be analyzed for the segregated scheme. Therefore, it is shown that the segregated scheme is more advantageous in terms of memory usage.

    Fig. 6

    Comparison of the computing time and memory requirement of the direct solver according to the multi-physics analysis schemes.

    The iterative solver also shows the same tendency as the direct solver as shown in Fig. 7. The memory requirement linearly increases as the number of deposition holes is increased. As a result, 101 and 162 deposition holes can be analyzed with the fully-coupled and segregated schemes respectively. Therefore, regardless of the linear system solver type, a larger domain with a greater number of deposition holes can be analyzed with the segregated scheme.

    Fig. 7

    Comparison of the computing time and memory requirement of the iterative solver according to the multi-physics analysis schemes.

    In Fig. 8, the computing time and memory requirement are compared between the direct and iterative solvers with the segregated scheme. The computing time shows little difference except for the case of 50 deposition holes, but the iterative solver is much more efficient in terms of the memory requirement. In particular, the discrepancy becomes larger as the number of deposition holes increases: the memory requirement of the direct solver is 1.7 times larger than that of the iterative solver for a single deposition hole, and 2.3 times larger for 50 deposition holes.

    Fig. 8

    Comparison of the direct solver and iterative solver with segregated scheme in terms of the computing time and memory requirement.

    4. Conclusions

    In this work, various linear system solvers combined with multi-physics analysis methods are compared focusing on the near-field region, considering THC coupled multi-physics phenomena. Linear system solvers provided by COMSOL are examined with a single deposition hole problem, and PARDISO and GMRES-SSOR are chosen as the representative solvers for the direct and iterative methods, respectively. With the selected linear system solvers, the fully-coupled and segregated schemes are compared, and it is found that the GMRES-SSOR solver with a segregated scheme shows the best performance with respect to the computing time and memory requirement. The results of the comparative analysis are expected to provide a useful guideline for choosing better numerical analysis methods for TSPA. In further research, parallel computing strategies with a distributed memory system will be examined in order to deal with the large memory requirement of a real-scale TSPA problem.

    Acknowledgements

    This work was supported by the Institute for Korea Spent Nuclear Fuel (iKSNF) and National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT, MSIT) (NRF-2021M2E1A1085185).

    References

    1. J. Lee, I. Kim, H. Ju, H. Choi, and D. Cho, “Proposal of an Improved Concept Design for the Deep Geological Disposal System of Spent Nuclear Fuel in Korea”, J. Nucl. Fuel Cycle Waste Technol., 18(spc), 1-19 (2020).
    2. J.W. Kim, H. Jang, D.H. Lee, H.H. Cho, J. Lee, M. Kim, and H. Ju, “A Modularized Numerical Framework for the Process-based Total System Performance Assessment of Geological Disposal Systems”, Nucl. Eng. Technol., 54(8), 2828-2839 (2022).
    3. COMSOL AB, COMSOL Multiphysics Reference Manual, Version: COMSOL 6.0, Stockholm, Sweden (2021).
    4. O. Schenk, K. Gärtner, W. Fichtner, and A. Stricker, “PARDISO: A High-Performance Serial and Parallel Sparse Linear Solver in Semiconductor Device Simulation”, Future Gener. Comput. Syst., 18(1), 69-78 (2001).
    5. P.R. Amestoy, I.S. Duff, and J.Y. L’Excellent, “Multifrontal Parallel Distributed Symmetric and Unsymmetric Solvers”, Comput. Methods Appl. Mech. Eng., 184(2-4), 501-520 (2000).
    6. M.R. Hestenes and E. Stiefel, “Methods of Conjugate Gradients for Solving Linear Systems”, J. Res. Natl. Bur. Stand., 49(6), 409-436 (1952).
    7. H.A. Van der Vorst, “Bi-CGSTAB: A Fast and Smoothly Converging Variant of Bi-CG for the Solution of Nonsymmetric Linear Systems”, SIAM J. Sci. Statist. Comput., 13(2), 631-644 (1992).
    8. Y. Saad and M.H. Schultz, “GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems”, SIAM J. Sci. Statist. Comput., 7(3), 856-869 (1986).
    9. Swedish Nuclear Fuel and Waste Management Co., Long-Term Safety for the Final Repository for Spent Nuclear Fuel at Forsmark: Main Report of the SR-Site Project, Volume III, SKB Technical Report TR-11-01 (2011).
    10. J.R. Gilbert and S. Toledo, “An Assessment of Incomplete-LU Preconditioners for Nonsymmetric Linear Systems”, Informatica, 24, 409-425 (2000).
    11. A. Hadjidimos, “Successive Overrelaxation (SOR) and Related Methods”, J. Comput. Appl. Math., 123(1-2), 177-199 (2000).
    12. V. John, “Higher Order Finite Element Methods and Multigrid Solvers in a Benchmark Problem for the 3D Navier-Stokes Equations”, Int. J. Numer. Methods Fluids, 40(6), 775-798 (2002).
    13. W. Hackbusch, Multi-Grid Methods and Applications, 1st ed., Springer-Verlag, Berlin (1985).
